
    Word Sense Disambiguation using a Bidirectional LSTM

    In this paper we present a clean, yet effective, model for word sense disambiguation. Our approach leverages a bidirectional long short-term memory network which is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from the raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets, using identical hyperparameter settings, which are in turn tuned on a third set of held-out data. We employ no external resources (e.g. knowledge graphs, part-of-speech tagging, etc.), language-specific features, or hand-crafted rules, but still achieve results statistically equivalent to the best state-of-the-art systems, which are not subject to such limitations.
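    The key architectural idea above, one shared context encoder reused for every ambiguous word, with only the sense representations being word-specific, can be illustrated with a deliberately tiny sketch. This is not the paper's network: the shared bidirectional LSTM is replaced here by a simple averaging encoder, and all vectors and sense labels are hypothetical.

```python
# Toy sketch (NOT the paper's model): one shared context encoder is reused
# for every ambiguous word; only the sense vectors are word-specific.

def encode_context(word_vecs):
    """Shared 'encoder': here just an average of context word vectors
    (the paper uses a bidirectional LSTM in this role instead)."""
    dim = len(word_vecs[0])
    return [sum(v[i] for v in word_vecs) / len(word_vecs) for i in range(dim)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Word-specific sense vectors (hypothetical 2-d embeddings).
SENSES = {
    "bank": {"river_bank": [1.0, 0.0], "finance_bank": [0.0, 1.0]},
}

def disambiguate(word, context_vecs):
    ctx = encode_context(context_vecs)  # same encoder for every word
    scores = {s: dot(ctx, v) for s, v in SENSES[word].items()}
    return max(scores, key=scores.get)

# Context vectors dominated by the first dimension pick the first sense.
print(disambiguate("bank", [[0.9, 0.1], [0.8, 0.0]]))  # river_bank
```

    Because the encoder parameters are shared, every training example for any word improves the context representation for all words, which is what lets the model scale with vocabulary size.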

    An algorithm for data-driven shifting bottleneck detection

    Manufacturing companies continuously capture shop-floor information using sensor technologies, Manufacturing Execution Systems (MES), and Enterprise Resource Planning (ERP) systems. The volumes of data collected by these technologies are growing, and the pace of that growth is accelerating. Manufacturing data is constantly changing but immediately relevant, and collecting and analysing it in real time can lead to increased productivity. In particular, prioritising improvement activities such as cycle-time improvement, setup-time reduction and maintenance on bottleneck machines is an important part of shop-floor operations management. The first step in that process is the identification of bottlenecks. This paper introduces a purely data-driven shifting bottleneck detection algorithm that identifies bottlenecks from the real-time machine data captured by MES. The algorithm detects the current bottleneck at any given time, as well as the average bottlenecks and the non-bottlenecks over a time interval. It has been tested on real-world MES data sets from two manufacturing companies, identifying the potential and the prerequisites of the data-driven method. The main prerequisite of the proposed method is that all states of the machines are monitored by the MES during the production run.
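    The two detection outputs described above (the momentary bottleneck and the average bottleneck over an interval) can be sketched in a few lines. This is a minimal illustration of the shifting-bottleneck idea on hypothetical machine-state traces, not the paper's algorithm: a common data-driven heuristic is that the momentary bottleneck is the machine with the longest uninterrupted active period at that instant.

```python
# Hypothetical MES-style state traces sampled each minute:
# 'A' = active (working), 'W' = waiting (starved/blocked).
traces = {
    "M1": "AAWWWAAAAA",
    "M2": "AAAAAAAAWW",
    "M3": "WWAAAWWAAW",
}

def momentary_bottleneck(traces, t):
    """Machine whose current uninterrupted active period is longest at time t."""
    def run_length(tr):
        if tr[t] != "A":
            return 0
        n, i = 0, t
        while i >= 0 and tr[i] == "A":
            n += 1
            i -= 1
        return n
    return max(traces, key=lambda m: run_length(traces[m]))

def average_bottleneck(traces):
    """Machine with the largest share of active time over the whole interval."""
    return max(traces, key=lambda m: traces[m].count("A"))

print(momentary_bottleneck(traces, 7))  # M2: active without interruption since t=0
print(average_bottleneck(traces))       # M2: most total active time
```

    On real MES data the state model is richer (setup, breakdown, starved, blocked), but the prerequisite stated in the abstract is visible even here: every machine state must be logged for the active periods to be computed.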

    A data-driven algorithm to predict throughput bottlenecks in a production system based on active periods of the machines

    Smart manufacturing is reshaping the manufacturing industry by boosting the integration of information and communication technologies with manufacturing processes. As a result, manufacturing companies generate large volumes of machine data which can potentially be used to make data-driven operational decisions using informative computerized algorithms. In the manufacturing domain, it is well known that the productivity of a production line is constrained by throughput bottlenecks. The operational dynamics of the production system cause the bottlenecks to shift among the production resources between production runs. Predicting the throughput bottlenecks of future production runs therefore allows production and maintenance engineers to proactively plan resources, effectively manage the bottlenecks, and achieve higher throughput. This paper proposes an active-period-based data-driven algorithm that predicts the throughput bottlenecks of a future production run from large sets of machine data. To facilitate the prediction, an auto-regressive integrated moving average (ARIMA) method is employed to forecast the active periods of each machine. The novelty of the work is the integration of the ARIMA methodology with the data-driven active-period technique to form a bottleneck prediction algorithm. The proposed algorithm is tested on real-world production data from an automotive production line, and is evaluated by treating prediction as a binary classification problem and adapting appropriate evaluation metrics. Furthermore, an attempt is made to determine how much past data is needed to better forecast the active periods.
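    The prediction step can be illustrated with a stripped-down stand-in for the time-series model: a least-squares AR(1) forecast of each machine's next active period, after which the machine with the largest forecast is flagged as the predicted bottleneck. The paper uses a full ARIMA model; the one-lag autoregression and all durations below are simplifying assumptions for illustration.

```python
# Sketch: forecast each machine's next active period with a simple AR(1)
# model (the paper uses full ARIMA), then flag the machine with the largest
# forecast as the predicted bottleneck. All data is hypothetical.

def ar1_forecast(series):
    """Least-squares AR(1) fit y[t] = a*y[t-1] + b, then a one-step forecast."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a * series[-1] + b

# Active-period durations (hours) per machine over past production runs.
history = {
    "M1": [4.0, 4.2, 4.1, 4.3, 4.2],
    "M2": [5.0, 5.4, 5.8, 6.2, 6.6],   # trending upward
}

forecasts = {m: ar1_forecast(s) for m, s in history.items()}
predicted = max(forecasts, key=forecasts.get)
print(predicted)  # M2
```

    Evaluating such a predictor as a binary classifier, as the abstract describes, then amounts to comparing the predicted bottleneck label per run against the bottleneck actually observed.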

    Bayesian optimization in ab initio nuclear physics

    Theoretical models of the strong nuclear interaction contain unknown coupling constants (parameters) that must be determined using a pool of calibration data. In cases where the models are complex, leading to time-consuming calculations, it is particularly challenging to systematically search the corresponding parameter domain for the best fit to the data. In this paper, we explore the prospect of applying Bayesian optimization to constrain the coupling constants in chiral effective field theory descriptions of the nuclear interaction. We find that Bayesian optimization performs rather well on low-dimensional parameter domains and foresee that it can be particularly useful for optimizing a smaller set of coupling constants. A specific example could be the determination of the leading three-nucleon forces using data from finite nuclei or three-nucleon scattering experiments.
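    The core of a Bayesian-optimization loop is an acquisition function that uses the surrogate model's posterior mean and uncertainty to choose the next expensive evaluation. Below is a sketch of the standard expected-improvement criterion for minimization; the candidate points and Gaussian-process posterior values are hypothetical stand-ins, not real chiral-EFT chi-squared surfaces.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """Expected improvement (for minimization) at a point where the surrogate
    posterior has mean mu and standard deviation sigma; best is the lowest
    objective value observed so far."""
    if sigma == 0.0:
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal cdf
    return (best - mu - xi) * cdf + sigma * pdf

# Hypothetical candidate coupling-constant settings with GP posterior
# (mean, std) of the objective (e.g. a chi-squared against calibration data).
candidates = {"c1": (1.2, 0.10), "c2": (1.0, 0.50), "c3": (1.4, 0.01)}
best_so_far = 1.1
next_point = max(candidates,
                 key=lambda c: expected_improvement(*candidates[c], best_so_far))
print(next_point)  # c2: low mean AND high uncertainty wins
```

    The example shows why the method suits expensive calculations: the uncertain-but-promising point c2 is chosen over the nearly-certain-but-mediocre c3, balancing exploration against exploitation with a single cheap formula per candidate.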

    Data-driven algorithm for throughput bottleneck analysis of production systems

    The digital transformation of manufacturing industries is expected to yield increased productivity. Companies collect large volumes of real-time machine data and are seeking new ways to use it in furthering data-driven decision making. A challenge for these companies is identifying throughput bottlenecks using the real-time machine data they collect. This paper proposes a data-driven algorithm to better identify bottleneck groups and provide diagnostic insights. The algorithm is based on the active period theory of throughput bottleneck analysis. It integrates available manufacturing execution systems (MES) data from the machines and tests the statistical significance of any bottlenecks detected. The algorithm can be automated to allow data-driven decision making on the shop floor, thus improving throughput. Real-world MES datasets were used to develop and test the algorithm, producing research outcomes useful to manufacturing industries. This research pushes the standards of throughput bottleneck analysis, using an interdisciplinary approach based on production and data sciences.
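    The significance-testing step mentioned above can be sketched generically: once mean active periods are computed per machine, a test checks whether the apparent bottleneck's active periods are longer than another machine's by more than chance. A permutation test is used here purely as one valid illustration; the paper's exact statistical procedure may differ, and all durations are hypothetical.

```python
import random

def permutation_test(a, b, n_perm=2000, seed=0):
    """One-sided p-value for the hypothesis mean(a) > mean(b)."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if sum(pa) / len(pa) - sum(pb) / len(pb) >= observed:
            count += 1
    return count / n_perm

# Hypothetical per-run active-period durations (hours) for two machines.
m1_active = [8.1, 7.9, 8.4, 8.0, 8.3, 7.8]   # candidate bottleneck
m2_active = [5.2, 5.6, 5.1, 5.4, 5.3, 5.5]
p = permutation_test(m1_active, m2_active)
print(p < 0.05)  # True: M1's longer active periods are statistically significant
```

    Attaching a p-value like this to each detected bottleneck is what separates a defensible, automatable shop-floor decision from a one-off ranking of noisy averages.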

    LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing

    LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.

    LEGaTO: towards energy-efficient, secure, fault-tolerant toolset for heterogeneous computing

    LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.

    Finding Predictive Patterns in Historical Stock Data Using Self-Organizing Maps and Particle Swarm Optimization

    The purpose of this thesis is to find a new methodology for finding predictive patterns in candlestick charts without predefining what these patterns might look like. An algorithm combining particle swarm optimization and a self-organizing map has been implemented and evaluated, using non-transformed daily open, high, low and close data as input. The algorithm found predictive patterns that outperformed random trading with statistical significance. Moreover, interesting properties such as the optimal pattern length, target length, and the similarity of the input to the found pattern are discussed.
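    The self-organizing-map half of the method can be illustrated with a single training step: find the prototype (best-matching unit) closest to a normalized price pattern and pull it toward the input. The particle-swarm part, which tunes the search in the thesis, is omitted here, and all prototypes and patterns are hypothetical.

```python
# Toy SOM step on hypothetical normalized 4-day close-price patterns.

def dist2(a, b):
    """Squared Euclidean distance between two patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_step(nodes, pattern, lr=0.5):
    """One SOM update: move the best-matching unit toward the input pattern.
    (A full SOM would also update the BMU's grid neighbours.)"""
    bmu = min(range(len(nodes)), key=lambda i: dist2(nodes[i], pattern))
    nodes[bmu] = [w + lr * (p - w) for w, p in zip(nodes[bmu], pattern)]
    return bmu

# Each node is a prototype of a 4-day pattern (e.g. flat vs. level shapes).
nodes = [[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]
pattern = [0.9, 1.0, 1.1, 1.2]   # rising pattern, closer to the second node
bmu = train_step(nodes, pattern)
print(bmu, nodes[bmu])
```

    After many such steps the prototypes converge to recurring chart shapes, which is how the method discovers candidate patterns without any predefined templates; their predictive value is then tested against random trading.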

    Machine Learning to predict a ship's fuel consumption in seaways

    Fatigue cracks can be observed quite frequently on today's ocean-crossing vessels. To ensure the safety of ship structures sailing at sea, it is important to know the residual fatigue life of these damaged structures. Fracture mechanics theory is often employed to estimate how fast such cracks propagate through ship structures; however, large uncertainties are associated with crack prediction and residual fatigue life analysis. In this study, two sources of uncertainty are investigated: the reliability of the encountered wave environments connected with ship-load determination, and the different fracture estimation methods used for crack propagation analysis. First, different available codes based on fracture mechanics theory are used to compute the stress-intensity-factor-related parameters for crack propagation analysis. The analysis is carried out for both 2D and 3D cases of some typical ship structural details, and a comparison is presented to illustrate the uncertainties of crack propagation analysis associated with different codes. Furthermore, the structural details are assumed to undergo dynamic loading from a containership operated in the North Atlantic. A statistical wave model is used to generate wave environments along recorded ship routes for different years, and the uncertainty of crack growth analysis associated with the encountered weather environments is also investigated. The comparison of these two uncertainties indicates the need for further development of fracture mechanics theory and the associated numerical codes, as well as for reliable life-cycle encountered weather environments.
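    The crack-growth computation that the compared codes implement in various forms is, at its core, an integration of Paris' law, da/dN = C (dK)^m with dK = Y dS sqrt(pi a). A minimal numerical sketch follows; the material constants, geometry factor Y, and crack sizes are illustrative steel-like values, not the study's ship-specific inputs.

```python
import math

def cycles_to_length(a0, a_final, C, m, stress_range, Y=1.0, da_step=1e-4):
    """Integrate Paris' law da/dN = C*(dK)^m from crack length a0 to a_final.

    a0, a_final, da_step in metres; stress_range in MPa; returns load cycles.
    """
    a, cycles = a0, 0.0
    while a < a_final:
        dK = Y * stress_range * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        da_dN = C * dK ** m                             # growth rate, m/cycle
        cycles += da_step / da_dN                       # cycles for this increment
        a += da_step
    return cycles

# Illustrative values: 1 mm initial crack grown to 20 mm under a 100 MPa range.
N = cycles_to_length(a0=0.001, a_final=0.02, C=1e-11, m=3.0, stress_range=100.0)
print(f"{N:.3g} cycles")
```

    Because dK depends on the stress range, the uncertainty in the encountered wave environment enters directly through stress_range, while the choice of code enters through C, m and Y, which is exactly the two-way uncertainty comparison the study describes.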

    Lower age increases the risk of revision for stemmed and resurfacing shoulder hemiarthroplasty.

    Background and purpose - The number of patients for whom shoulder hemiarthroplasty (SHA) is an option is still substantial. Descriptive analyses performed by the Swedish Shoulder Arthroplasty Registry (SSAR) showed that while patients receiving the two SHA designs, resurfacing hemi (RH) and stemmed hemi (SH), reported similar shoulder functionality and quality of life, the revision rate for RH (12%) was higher than for SH (6.7%); this difference was studied. Patients and methods - All primary SHA (n = 1,140) for OA reported to SSAR between 1999 and 2009 were analyzed regarding risk factors for revision and PROM outcome: 950 shoulders with primary OA (POA) and 190 with secondary OA (SOA). Mean age was 67.4 years (SD 10.8). PROMs, including WOOS and EQ-5D, were collected at 5 years, until December 31, 2014. Results - 76/950 prostheses implanted for POA and 16/190 implanted for SOA were revised. Age at primary surgery was the main factor influencing the risk of revision: lower age increased the risk, and this also explained the difference between SH and RH. We also found that SH and RH had similar outcomes measured by PROM, but the POA group had higher scores than the SOA group, with a clinically relevant difference of 10% in WOOS. Interpretation - The risk of revision for SH and RH is similar when adjusted for age and does not depend on primary diagnosis or sex. Lower age increases the risk of revision. Patients suffering from POA experience better shoulder functionality than SOA patients, irrespective of implant type.